19 research outputs found

    Active Vision With Multiresolution Wavelets

    A wavelet decomposition for multiscale edge detection is used to separate border edges from texture in an image, toward the goal of a complete segmentation by Active Perception for robotic exploration of a scene. The physical limitations of the image acquisition system and the robotic system constrain the range of scales we consider. We link edges through scale space, using the characteristics of these wavelets for guidance. The linked zero crossings are used to remove texture and preserve borders, after which the scene can be reconstructed without texture.
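    A minimal sketch of the multiscale zero-crossing idea described above: edge responses are computed at several scales and only crossings that persist across scales are kept as border edges, while fine-scale-only responses are treated as texture. This sketch uses a Laplacian-of-Gaussian pyramid via scipy rather than the authors' specific wavelets, and the scale range and persistence rule are illustrative assumptions.

```python
import numpy as np
from scipy import ndimage

def zero_crossings(response):
    """Boolean map of sign changes between horizontally/vertically adjacent pixels."""
    zc = np.zeros(response.shape, dtype=bool)
    zc[:, :-1] |= np.signbit(response[:, :-1]) != np.signbit(response[:, 1:])
    zc[:-1, :] |= np.signbit(response[:-1, :]) != np.signbit(response[1:, :])
    return zc

def multiscale_border_edges(image, sigmas=(1.0, 2.0, 4.0, 8.0)):
    """Keep only zero crossings that appear at every scale in `sigmas`."""
    persistent = None
    for sigma in sigmas:
        log = ndimage.gaussian_laplace(image.astype(float), sigma=sigma)
        zc = zero_crossings(log)
        # Dilate slightly so crossings at coarse scales can match nearby
        # crossings at fine scales (a crude form of scale-space linking).
        zc = ndimage.binary_dilation(zc, iterations=1)
        persistent = zc if persistent is None else (persistent & zc)
    return persistent

# Usage: edges = multiscale_border_edges(gray_image)  # gray_image: 2-D array
```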

    Supervised semantic labeling of places using information extracted from sensor data

    Indoor environments can typically be divided into places with different functionalities like corridors, rooms or doorways. The ability to learn such semantic categories from sensor data enables a mobile robot to extend the representation of the environment, facilitating interaction with humans. As an example, natural language terms like “corridor” or “room” can be used to communicate the position of the robot in a map in a more intuitive way. In this work, we first propose an approach based on supervised learning to classify the pose of a mobile robot into semantic classes. Our method uses AdaBoost to boost simple features extracted from sensor range data into a strong classifier. We present two main applications of this approach. Firstly, we show how our approach can be utilized by a moving robot for an online classification of the poses traversed along its path using a hidden Markov model. In this case we additionally use objects extracted from images as features. Secondly, we introduce an approach to learn topological maps from geometric maps by applying our semantic classification procedure in combination with a probabilistic relaxation method. Alternatively, we apply associative Markov networks to classify geometric maps and compare the results with a relaxation approach. Experimental results obtained in simulation and with real robots demonstrate the effectiveness of our approach in various indoor environments.
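    A minimal sketch of the supervised pipeline described above: simple geometric features are extracted from a laser range scan and boosted with AdaBoost into a place classifier. The feature set here (statistics of beam lengths and of the scan polygon) is a small illustrative subset, not the authors' full feature list, and the data variables are assumed.

```python
import numpy as np
from sklearn.ensemble import AdaBoostClassifier

def scan_features(ranges):
    """Compute a few simple features from one 360-degree range scan (array of beam lengths)."""
    ranges = np.asarray(ranges, dtype=float)
    angles = np.linspace(0.0, 2.0 * np.pi, len(ranges), endpoint=False)
    xs, ys = ranges * np.cos(angles), ranges * np.sin(angles)
    # Area and perimeter of the polygon traced by the scan endpoints (shoelace formula).
    area = 0.5 * np.abs(np.dot(xs, np.roll(ys, -1)) - np.dot(ys, np.roll(xs, -1)))
    perimeter = np.sum(np.hypot(np.diff(xs, append=xs[0]), np.diff(ys, append=ys[0])))
    return np.array([
        ranges.mean(), ranges.std(), ranges.min(), ranges.max(),
        np.abs(np.diff(ranges)).mean(),                    # average beam-to-beam difference
        area, perimeter, area / (perimeter ** 2 + 1e-9),   # compactness
    ])

def train_place_classifier(train_scans, train_labels):
    """train_scans/train_labels: assumed labeled scans with classes like 'corridor', 'room', 'doorway'."""
    X = np.stack([scan_features(s) for s in train_scans])
    clf = AdaBoostClassifier(n_estimators=100)  # boosts decision stumps by default
    clf.fit(X, train_labels)
    return clf
```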

    A Local Optimization Framework for Multi-Objective Ergodic Search

    Robots have the potential to perform search for a variety of applications under different scenarios. Our work is motivated by humanitarian assistance and disaster relief (HADR), where it is often critical to find signs of life in the presence of conflicting criteria, objectives, and information. We believe ergodic search can provide a framework for exploiting available information as well as exploring for new information for applications such as HADR, especially when time is of the essence. Ergodic search algorithms plan trajectories such that the time spent in a region is proportional to the amount of information in that region, and are thus able to naturally balance exploitation (myopically searching high-information areas) and exploration (visiting all locations in the search space for new information). Existing ergodic search algorithms, as well as other information-based approaches, typically consider search using only a single information map. However, in many scenarios, the use of multiple information maps that encode different types of relevant information is common. Ergodic search methods currently cannot search over multiple information maps simultaneously, nor do they have a way to balance which information gets priority. This leads us to formulate a Multi-Objective Ergodic Search (MOES) problem, which aims at finding the so-called Pareto-optimal solutions for the purpose of providing human decision makers with various solutions that trade off between conflicting criteria. To efficiently solve MOES, we develop a framework called Sequential Local Ergodic Search (SLES) that converts a MOES problem into a "weight space coverage" problem. It leverages recent advances in ergodic search methods as well as the idea of local optimization to efficiently approximate the Pareto-optimal front. Our numerical results show that SLES runs distinctly faster than the baseline methods. Comment: Robotics: Science and Systems 202
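    A minimal sketch of the weighted-sum scalarization idea underlying multi-objective ergodic search: several information maps are blended with a weight vector, and a candidate trajectory is scored against the blended map with the standard spectral ergodic metric on the unit square; sweeping weights over the simplex then yields a set of trade-off solutions. The trajectory optimizer itself is omitted, basis normalization is simplified, and the names and map shapes are illustrative assumptions rather than the paper's SLES implementation.

```python
import numpy as np

def ergodic_metric(traj, phi, n_modes=8):
    """Spectral ergodic metric between a trajectory (T x 2 array in [0,1]^2)
    and a discretized target density phi (H x W array summing to 1)."""
    H, W = phi.shape
    ys, xs = (np.arange(H) + 0.5) / H, (np.arange(W) + 0.5) / W
    metric = 0.0
    for k1 in range(n_modes):
        for k2 in range(n_modes):
            basis_map = np.outer(np.cos(k1 * np.pi * ys), np.cos(k2 * np.pi * xs))
            phi_k = np.sum(phi * basis_map)                    # map Fourier coefficient
            c_k = np.mean(np.cos(k1 * np.pi * traj[:, 1]) *
                          np.cos(k2 * np.pi * traj[:, 0]))     # time-averaged trajectory coefficient
            lam = (1.0 + k1 ** 2 + k2 ** 2) ** (-1.5)          # Sobolev-norm weight
            metric += lam * (c_k - phi_k) ** 2
    return metric

def scalarized_score(traj, info_maps, weights):
    """Blend multiple information maps with simplex weights and score the trajectory."""
    blended = sum(w * m for w, m in zip(weights, info_maps))
    blended = blended / blended.sum()
    return ergodic_metric(traj, blended)
```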

    Creating xBD: A Dataset for Assessing Building Damage from Satellite Imagery

    We present a preliminary report for xBD, a new large-scale dataset for the advancement of change detection and building damage assessment for humanitarian assistance and disaster recovery research. Logistics, resource planning, and damage estimation are difficult tasks after a disaster, and putting first responders into post-disaster situations is dangerous and costly. Using passive methods, such as analysis of satellite imagery, to perform damage assessment saves manpower, lowers risk, and expedites an otherwise dangerous process. xBD provides pre- and post-event multi-band satellite imagery from a variety of disaster events with building polygons, classification labels for damage types, ordinal labels of damage level, and corresponding satellite metadata. Furthermore, the dataset contains bounding boxes and labels for environmental factors such as fire, water, and smoke. xBD will be the largest building damage assessment dataset to date, containing approximately 700,000 building annotations across over 5,000 km² of imagery from 15 countries.
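    A minimal, hypothetical record schema for working with a building-damage dataset of the kind described above (a pre/post image pair plus per-building polygons and damage labels). The field names and on-disk layout are assumptions for illustration and do not reflect the actual xBD file format.

```python
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class BuildingAnnotation:
    polygon: List[Tuple[float, float]]   # building footprint in pixel coordinates
    damage_class: str                    # categorical damage type label
    damage_level: int                    # ordinal damage severity

@dataclass
class DisasterTile:
    disaster_id: str                     # event name or code (assumed identifier)
    pre_image_path: str                  # path to pre-event multi-band image
    post_image_path: str                 # path to post-event multi-band image
    buildings: List[BuildingAnnotation] = field(default_factory=list)
```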